Formulas for the variance of the sample mean in finite state Markov processes

Authors

Abstract
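
For orientation only (a standard textbook expression, not a formula taken from the article itself), the asymptotic variance of the sample mean of a function f over an ergodic finite-state Markov chain with transition matrix P and stationary distribution π can be written via the fundamental matrix Z = (I − P + 1π⊤)⁻¹:

$$
\operatorname{Var}\!\left(\bar f_n\right) \approx \frac{\sigma^2}{n},
\qquad
\sigma^2 = g^{\top} D_\pi \,(2Z - I)\, g,
\qquad
g = f - (\pi^{\top} f)\,\mathbf{1},
\quad
D_\pi = \operatorname{diag}(\pi).
$$

A minimal numerical sketch of this textbook expression with numpy; the chain and the function f below are invented for the example:

import numpy as np

# Illustrative 3-state chain and function f (values made up for the example).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
f = np.array([1.0, 2.0, 4.0])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

# Fundamental matrix Z = (I - P + 1 pi^T)^(-1).
k = len(pi)
Z = np.linalg.inv(np.eye(k) - P + np.outer(np.ones(k), pi))

# Asymptotic variance of the sample mean: sigma^2 = g^T D_pi (2Z - I) g.
g = f - pi @ f
sigma2 = g @ (np.diag(pi) @ (2 * Z - np.eye(k))) @ g
print(sigma2)  # Var(sample mean over n steps) is roughly sigma2 / n for large n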


Similar resources

Study of cohesive devices in the textbook of English for the students of psychology by Rastegarpour

This study investigates the cohesive devices used in the textbook of English for students of psychology. The research questions and hypotheses in the present study concern the frequency and distribution of grammatical and lexical cohesive devices. To answer these questions, all grammatical and lexical cohesive devices in the reading comprehension passages from 6 of the textbook's 21 units th...

Mean-Variance Optimization in Markov Decision Processes

We consider finite-horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that computing a policy that maximizes the mean reward under a variance constraint is NP-hard in some cases, and strongly NP-hard in oth...
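
To make the mean-variance objective concrete, here is a small sketch, not taken from the paper, that estimates the mean and variance of the cumulative reward of a fixed deterministic policy in a toy finite-horizon MDP by simulation; the MDP, the policy, and all numbers are invented for illustration. The hardness results in the abstract concern optimizing such criteria over policies, which this sketch does not attempt.

import numpy as np

rng = np.random.default_rng(0)

# Toy finite-horizon MDP (2 states, 2 actions); all numbers are illustrative.
H = 5                                     # horizon
P = np.array([[[0.8, 0.2], [0.3, 0.7]],   # P[s, a, s'] transition probabilities
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],                 # R[s, a] immediate reward
              [0.0, 2.0]])
policy = np.array([0, 1])                 # fixed deterministic policy: action per state

def rollout(start_state=0):
    """Cumulative reward of one episode under the fixed policy."""
    s, total = start_state, 0.0
    for _ in range(H):
        a = policy[s]
        total += R[s, a]
        s = rng.choice(2, p=P[s, a])
    return total

returns = np.array([rollout() for _ in range(20000)])
print("mean of cumulative reward    :", returns.mean())
print("variance of cumulative reward:", returns.var())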


The test for adverse selection in the life insurance market: the case of Mellat Insurance Company

Adverse selection is one of the fundamental problems in the insurance industry. It was first discussed and studied by Rothschild and Stiglitz in 1960. Since then, many researchers have developed various models to analyze demand in the life insurance industry, which stems entirely from the uncertainty in this industry. The aim is to find the conditions under which accepting or rejecting a policyholder works to the benefit or the detriment of the insurance company ...


Estimating Variance of the Sample Mean in Two-phase Sampling with Unit Non-response Effect

In sample surveys we always deal with two types of error: sampling error and non-sampling error. One of the most common non-sampling errors is non-response. This error occurs when some sample units are not observed at all, or are observed but do not answer some of the questions. Completely preventing this error is not possible, but it can be reduced significantly. Non-response causes bias and ...
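
For reference only, a classical double-sampling result for non-response due to Hansen and Hurwitz, which is not necessarily the estimator studied in this article: if a sample of n units is drawn from a population of N, and the n₂ non-respondents are subsampled at rate 1/k and followed up, the variance of the resulting estimator of the population mean is

$$
V(\bar{y}) = \left(\frac{1}{n} - \frac{1}{N}\right) S^{2} + \frac{(k-1)\, W_{2}\, S_{2}^{2}}{n},
$$

where $S^2$ is the population variance, $W_2$ the population proportion of non-respondents, and $S_2^2$ the variance within the non-response stratum.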


Finite-horizon variance penalised Markov decision processes

We consider a finite-horizon Markov decision process with only terminal rewards. We describe a finite algorithm for computing a Markov deterministic policy which maximises the variance-penalised reward, and we outline a vertex elimination algorithm which can reduce the computation involved.
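
As an illustration of what a variance-penalised criterion typically looks like (the precise criterion and weighting used in the paper may differ), one common formulation is to choose a Markov deterministic policy π maximising

$$
V_{\lambda}(\pi) = \mathbb{E}_{\pi}\!\left[R\right] - \lambda \operatorname{Var}_{\pi}\!\left(R\right), \qquad \lambda > 0,
$$

where R is the (here, terminal) reward accumulated over the finite horizon.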



Journal

Journal title: Journal of Statistical Computation and Simulation

Year: 1980

ISSN: 0094-9655, 1563-5163

DOI: 10.1080/00949658008810424